16 research outputs found

    Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB

    Hyperspectral signal reconstruction aims at recovering the original spectral input that produced a certain trichromatic (RGB) response from a capturing device or observer. Given the heavily underconstrained, non-linear nature of the problem, traditional techniques leverage different statistical properties of the spectral signal in order to build informative priors from real-world object reflectances for constructing such an RGB-to-spectral signal mapping. However, most of them treat each sample independently and thus do not benefit from the contextual information that the spatial dimensions can provide. We pose hyperspectral natural image reconstruction as an image-to-image mapping learning problem and apply a conditional generative adversarial framework to help capture spatial semantics. This is the first time Convolutional Neural Networks (and, in particular, Generative Adversarial Networks) are used to solve this task. Quantitative evaluation shows a Root Mean Squared Error (RMSE) drop of 44.7% and a Relative RMSE drop of 47.0% on the ICVL natural hyperspectral image dataset.
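    As a rough illustration of the evaluation metrics quoted above, the following sketch computes RMSE and a Relative RMSE between a hyperspectral ground-truth cube and its reconstruction. The per-element normalization by ground-truth magnitude is an assumed definition of Relative RMSE; the paper's exact formula may differ.

```python
import numpy as np

def rmse(gt, rec):
    # Root Mean Squared Error over all pixels and spectral bands.
    return np.sqrt(np.mean((gt - rec) ** 2))

def relative_rmse(gt, rec, eps=1e-8):
    # Per-element error normalized by the ground-truth magnitude
    # (one plausible definition; papers vary on the exact normalization).
    return np.sqrt(np.mean(((gt - rec) / (gt + eps)) ** 2))

# Toy hyperspectral cubes: height x width x 31 spectral bands.
rng = np.random.default_rng(0)
gt = rng.uniform(0.1, 1.0, (8, 8, 31))
rec = gt + rng.normal(0.0, 0.01, gt.shape)
err, rel_err = rmse(gt, rec), relative_rmse(gt, rec)
```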

    Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions

    Convolutional Neural Networks (CNNs) have demonstrated their capabilities in the agronomical field, especially for assessing the visual symptoms of plants. As these models grow both in the number of training images and in the number of supported crops and diseases, a dichotomy arises between (1) generating smaller models for each specific crop or (2) generating a single multi-crop model, a much more complex task (especially at early disease stages) but one that benefits from the variability of the entire multi-crop image dataset to enrich the learned image feature descriptions. In this work we first introduce a challenging dataset of more than one hundred thousand images taken by cell phone in real, uncontrolled field conditions. This dataset contains almost equally distributed disease stages of seventeen diseases and five crops (wheat, barley, corn, rice and rapeseed), where several diseases can be present in the same picture. Applying existing state-of-the-art deep neural network methods to validate the two hypothesised approaches, we obtained a balanced accuracy of BAC=0.92 when generating the smaller crop-specific models and BAC=0.93 when generating a single multi-crop model. We then propose three different CNN architectures that incorporate contextual non-image metadata, such as crop information, into an image-based Convolutional Neural Network. This combines the advantages of learning from the entire multi-crop dataset while reducing the complexity of the disease classification tasks. The crop-conditional plant disease classification network that incorporates the contextual information by concatenation at the embedding vector level obtains a balanced accuracy of 0.98, improving on all previous methods and eliminating 71% of their misclassifications.
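    The embedding-level concatenation described above can be sketched as follows: one-hot encode the crop identifier, append it to the image embedding, and classify the combined vector. The embedding size, weights and shapes below are illustrative placeholders, not the published architecture.

```python
import numpy as np

def crop_conditional_head(img_embedding, crop_id, n_crops, w, b):
    # One-hot encode the crop identifier and concatenate it with the
    # CNN image embedding before the final classification layer.
    one_hot = np.zeros(n_crops)
    one_hot[crop_id] = 1.0
    z = np.concatenate([img_embedding, one_hot])
    logits = w @ z + b                    # dense classification layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # softmax over disease classes

rng = np.random.default_rng(0)
emb = rng.normal(size=128)                # assumed embedding size
n_crops, n_diseases = 5, 17               # five crops, seventeen diseases
w = rng.normal(size=(n_diseases, 128 + n_crops)) * 0.01
b = np.zeros(n_diseases)
probs = crop_conditional_head(emb, crop_id=2, n_crops=n_crops, w=w, b=b)
```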

    A Probabilistic Model and Capturing Device for Remote Simultaneous Estimation of Spectral Emissivity and Temperature of Hot Emissive Materials

    Estimating the temperature of hot emissive samples (e.g. liquid slag) in harsh industrial environments such as steelmaking plants is a crucial yet challenging task, typically addressed by means of methods that require physical contact. Current remote methods require information on the emissivity of the sample. However, the spectral emissivity depends on the sample composition and on the temperature itself, and it is hardly measurable except under controlled laboratory procedures. In this work, we present a portable device and an associated probabilistic model that can simultaneously produce quasi-real-time estimates of the temperature and spectral emissivity of hot samples in the [0.2, 12.0] μm range at distances of up to 20 m. The model is robust against variable atmospheric conditions, and the device comes with a quick calibration procedure that allows for in-field deployment in rough industrial environments, thus enabling in-line measurements. We validate the temperature and emissivity estimates produced by our device against laboratory equipment under controlled conditions in the [550, 850] °C temperature range for two solid samples with well-characterized spectral emissivities: alumina (α-Al2O3) and hexagonal boron nitride (h-BN). The analysis of the results yields Root Mean Squared Errors of 32.3 °C and 5.7 °C, respectively, and well-correlated spectral emissivities. This work was supported in part by the Basque Government (Hazitek AURRERA B: Advanced and Useful REdesign of CSP process for new steel gRAdes) under Grant ZE-2017/00009.
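    The physical coupling between temperature and emissivity that makes this estimation problem ambiguous is Planck's law: the radiance observed at a given wavelength is the blackbody term scaled by the (unknown) spectral emissivity, so a cooler sample with higher emissivity can emit as much as a hotter one with lower emissivity. A minimal sketch of the forward model, assuming a single wavelength and a wavelength-independent emissivity:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def spectral_radiance(wavelength_m, temp_k, emissivity):
    # Emitted spectral radiance: emissivity times the Planck blackbody term.
    planck = (2 * H * C**2 / wavelength_m**5) / (
        math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)
    return emissivity * planck

# e.g. radiance at 5 um for a sample at 850 degC (1123.15 K), emissivity 0.9
radiance = spectral_radiance(5e-6, 1123.15, 0.9)
```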

    Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild

    Fungal infection can represent up to 50% of yield losses, making it necessary to apply effective and cost-efficient fungicide treatments, whose efficacy depends on the infestation type, situation and time. In these cases, a correct and early identification of the specific infection is mandatory to minimize yield losses and to increase the efficacy and efficiency of the treatments. Over the last years, a number of image-analysis-based methodologies have been proposed for automatic disease identification. Among these methods, the use of Deep Convolutional Neural Networks (CNNs) has proven tremendously successful for different visual classification tasks. In this work we extend previous work by Johannes et al. (2017) with an adapted Deep Residual Neural Network-based algorithm to deal with the detection of multiple plant diseases in real acquisition conditions, proposing several adaptations for early disease detection. This work analyses the performance of early identification of three relevant European endemic wheat diseases: Septoria (Septoria tritici), Tan Spot (Drechslera tritici-repentis) and Rust (Puccinia striiformis and Puccinia recondita).

    Automatic plant disease diagnosis using mobile capture devices, applied on a wheat use case

    Disease diagnosis based on the detection of early symptoms is a common decision threshold in integrated pest management strategies. Early phytosanitary treatment minimizes yield losses and increases the efficacy and efficiency of the treatments. However, the appearance of new diseases associated with new resistant crop variants complicates their early identification, delaying the application of the appropriate corrective actions. The use of image-based automated identification systems can bring early disease detection to farmers and technicians, but such systems perform poorly under real field conditions using mobile devices. A novel image processing algorithm based on candidate hot-spot detection in combination with statistical inference methods is proposed to tackle disease identification in wild conditions. This work analyses the performance of early identification of three European endemic wheat diseases: septoria, rust and tan spot. The analysis was done using 7 mobile devices and more than 3500 images captured at two pilot sites in Spain and Germany during 2014, 2015 and 2016. The results reveal AuC (Area under the Receiver Operating Characteristic, ROC, Curve) values higher than 0.80 for all the analyzed diseases in the pilot tests under real conditions.
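    The AuC metric reported above can be computed without an explicit ROC sweep via the rank-sum (Mann-Whitney) statistic. A minimal sketch, assuming no tied scores; the labels and scores below are toy data, not the paper's results:

```python
import numpy as np

def auc(labels, scores):
    # Area under the ROC curve via the rank-sum statistic:
    # probability that a random positive outranks a random negative.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = np.array([0, 0, 1, 1])        # 1 = diseased, 0 = healthy
scores = np.array([0.1, 0.4, 0.35, 0.8])
a = auc(labels, scores)
```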

    Ventricular fibrillation detection using deep learning techniques

    The detection of ventricular arrhythmias, and in particular ventricular fibrillation (VF), is a fundamental part of the arrhythmia classification algorithms of defibrillators. These algorithms decide whether to deliver the defibrillation shock, classifying rhythms as shockable (Sh) or non-shockable (NSh). This work proposes a new approach to Sh/NSh rhythm classification based on a deep learning system. Three public databases from the Physionet platform (CUDB, VFDB and AHADB) were used, and segments of 4 and 8 seconds were extracted. Segments were annotated as Sh or NSh based on the database annotations, which were audited by experts. The data were split by patient into 80% for algorithm development and 20% for evaluation. The deep learning system employs two convolutional stages followed by a long short-term memory (LSTM) network and a final neural-network classification stage. As a reference, an SVM classifier was optimized using the most effective ventricular arrhythmia detection features published in the literature. Sensitivity (Se, shockable rhythms), specificity (Sp, non-shockable rhythms) and accuracy (Acc) were computed. The deep learning method yielded Se, Sp and Acc of 98.5%, 99.4% and 99.2% for 4-second segments and 99.7%, 98.9% and 99.1% for 8-second segments. The algorithm reliably detects VF with 4-second segments, correcting 30% of the errors of the SVM-based method. This work was funded by the Ministerio de Economía y Competitividad through project TEC2015-64678R together with the European Regional Development Fund (FEDER), and by the UPV/EHU through project EHU16/18.
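    The front end of such a detector can be pictured as two convolutional stages that downsample the raw ECG segment into a feature sequence for the recurrent and classification stages. A toy numpy sketch with made-up averaging kernels (real kernels are learned) and an assumed 250 Hz sampling rate:

```python
import numpy as np

def conv1d(x, kernel, stride):
    # Valid 1-D convolution followed by ReLU, standing in for one
    # convolutional stage of the detector.
    k = len(kernel)
    n = (len(x) - k) // stride + 1
    out = np.array([np.dot(x[i * stride:i * stride + k], kernel)
                    for i in range(n)])
    return np.maximum(out, 0.0)

fs = 250                                            # assumed sampling rate (Hz)
ecg = np.random.default_rng(0).normal(size=4 * fs)  # 4-second segment
k1 = np.ones(8) / 8                                 # toy kernels
k2 = np.ones(4) / 4
h = conv1d(conv1d(ecg, k1, stride=4), k2, stride=2)
# h is the downsampled feature sequence that would feed the LSTM and
# the final neural-network stage for the Sh/NSh decision.
```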

    Automatic Red-Channel underwater image restoration.

    Underwater images typically exhibit color distortion and low contrast as a result of the exponential decay that light suffers as it travels through water. Moreover, colors associated with different wavelengths have different attenuation rates, with the red wavelength attenuating fastest. To restore underwater images, we propose a Red Channel method, in which the colors associated with short wavelengths are recovered, as expected for underwater images, leading to a recovery of the lost contrast. The Red Channel method can be interpreted as a variant of the Dark Channel method used for images degraded by atmospheric haze. Experimental results show that our technique gracefully handles artificially illuminated areas and achieves a natural color correction and superior or equivalent visibility improvement when compared to other state-of-the-art methods.
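    A minimal sketch of the prior this family of methods builds on: where the dark channel takes the per-patch minimum over (R, G, B), a red-channel variant takes it over (1 - R, G, B), so regions where red has been strongly attenuated score high. This is an illustrative reimplementation, not the authors' code:

```python
import numpy as np

def red_channel_prior(img, patch=3):
    # img: H x W x 3 array with values in [0, 1] (channels R, G, B).
    # Take the minimum over (1 - R, G, B) within a local patch.
    h, w, _ = img.shape
    inv = np.stack([1.0 - img[..., 0], img[..., 1], img[..., 2]], axis=-1)
    out = np.empty((h, w))
    r = patch // 2
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = inv[y0:y1, x0:x1].min()
    return out

gray = np.full((4, 4, 3), 0.5)     # uniform mid-gray scene
prior = red_channel_prior(gray)
```

    In dehazing-style restoration, a prior like this would drive the transmission estimate for each pixel, analogously to how the dark channel does for hazy images.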

    Self-supervised Blur Detection from Synthetically Blurred Scenes

    Blur detection aims at segmenting the blurred areas of a given image. Recent deep learning-based methods approach this problem by learning an end-to-end mapping between the blurred input and a binary mask representing the localization of its blurred areas. Nevertheless, the effectiveness of such deep models is limited by the scarcity of datasets annotated in terms of blur segmentation, as blur annotation is labour intensive. In this work, we bypass the need for such annotated datasets for end-to-end learning and instead rely on object proposals and a model of blur generation to produce a dataset of synthetically blurred images. This allows us to perform self-supervised learning over the generated pairs of images and ground-truth blur masks using CNNs, defining a framework that can be employed in purely self-supervised, weakly supervised or semi-supervised configurations. Interestingly, experimental results of such setups over the largest available blur segmentation datasets show that this approach achieves state-of-the-art results in blur segmentation, even without ever observing any real blurred image. This research was partially funded by the Basque Government's Industry Department under the ELKARTEK program's project ONKOIKER under agreement KK2018/00090. We also acknowledge the Spanish project TIN2016-79717-R and the Generalitat de Catalunya CERCA Program.
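    The synthetic data generation step can be sketched as: blur a copy of the image, paste the blurred content back only inside an object-proposal box, and record that box as the ground-truth mask. A toy grayscale version, with a simple box blur standing in for the paper's blur model and a rectangular box standing in for an object proposal:

```python
import numpy as np

def box_blur(img, k=5):
    # Simple k x k box blur (a stand-in for a realistic blur model).
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def synth_blurred_pair(img, box):
    # Blur only the region given by a proposal box (y0, y1, x0, x1)
    # and return the composite image plus its ground-truth blur mask.
    y0, y1, x0, x1 = box
    blurred = box_blur(img)
    out = img.astype(float).copy()
    out[y0:y1, x0:x1] = blurred[y0:y1, x0:x1]
    mask = np.zeros(img.shape, dtype=np.uint8)
    mask[y0:y1, x0:x1] = 1
    return out, mask

rng = np.random.default_rng(0)
img = rng.uniform(size=(16, 16))
pair, mask = synth_blurred_pair(img, (4, 12, 4, 12))
```

    The resulting (image, mask) pairs can then supervise a segmentation CNN without any manual blur annotation.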